Search Results

Documents authored by Weimann, Oren


Document
What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs?

Authors: Amir Abboud, Shay Mozes, and Oren Weimann

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
The Voronoi diagrams technique, introduced by Cabello [SODA'17] to compute the diameter of planar graphs in subquadratic time, has revolutionized the field of distance computations in planar graphs. We present novel applications of this technique in static, fault-tolerant, and partially-dynamic undirected unweighted planar graphs, as well as some new limitations.
- In the static case, we give n^{3+o(1)}/D² and Õ(n⋅D²) time algorithms for computing the diameter of a planar graph G with diameter D. These are faster than the state-of-the-art Õ(n^{5/3}) algorithm [SODA'18] when D < n^{1/3} or D > n^{2/3}.
- In the fault-tolerant setting, we give an n^{7/3+o(1)} time algorithm for computing the diameter of G⧵{e} for every edge e in G (the replacement diameter problem). This should be compared with the naive Õ(n^{8/3}) time algorithm that runs the static algorithm for every edge.
- In the incremental setting, where we wish to maintain the diameter while adding edges, we present an algorithm with total running time n^{7/3+o(1)}. This should be compared with the naive Õ(n^{8/3}) time algorithm that runs the static algorithm after every update.
- We give a lower bound (conditioned on the SETH) ruling out an amortized O(n^{1-ε}) update time for maintaining the diameter in weighted planar graphs. The lower bound holds even for incremental or decremental updates.
Our upper bounds are obtained by novel uses and manipulations of Voronoi diagrams. These include maintaining the Voronoi diagram when edges of the graph are deleted, allowing the sites of the Voronoi diagram to lie on a BFS tree level (rather than on the boundaries of an r-division), and a new reduction from incremental diameter to incremental distance oracles that could be of interest beyond planar graphs. Our lower bound is the first lower bound for a dynamic planar graph problem that is conditioned on the SETH.
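For concreteness, here is a minimal sketch (not from the paper) of the naive baselines the abstract compares against: computing the diameter of an unweighted graph by one BFS per vertex, and the naive replacement-diameter routine that deletes each edge and recomputes. The adjacency-list dictionary format, the function names, and the assumption that the graph stays connected are illustrative choices.

from collections import deque

def eccentricity(adj, src):
    # BFS from src in an unweighted graph; return the largest distance reached.
    dist = {src: 0}
    queue = deque([src])
    far = 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                far = max(far, dist[v])
                queue.append(v)
    return far

def diameter(adj):
    # Naive O(n*m) diameter: one BFS per vertex (assumes a connected graph).
    return max(eccentricity(adj, s) for s in adj)

def replacement_diameters(adj, edges):
    # Naive replacement diameter: recompute the diameter of G \ {e} for every edge e
    # (assumes G \ {e} stays connected).
    result = {}
    for (u, v) in edges:
        adj[u].remove(v); adj[v].remove(u)
        result[(u, v)] = diameter(adj)
        adj[u].append(v); adj[v].append(u)
    return result

# Example: a 5-cycle has diameter 2; removing any edge turns it into a path of diameter 4.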

Cite as

Amir Abboud, Shay Mozes, and Oren Weimann. What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs?. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2023.4,
  author =	{Abboud, Amir and Mozes, Shay and Weimann, Oren},
  title =	{{What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs?}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.4},
  URN =		{urn:nbn:de:0030-drops-186575},
  doi =		{10.4230/LIPIcs.ESA.2023.4},
  annote =	{Keywords: Planar graphs, diameter, dynamic graphs, fault tolerance}
}
Document
Improved Compression of the Okamura-Seymour Metric

Authors: Shay Mozes, Nathan Wallheimer, and Oren Weimann

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
Let G = (V,E) be an undirected unweighted planar graph. Let S = {s_1,…,s_k} be the vertices of some face in G and let T ⊆ V be an arbitrary set of vertices. The Okamura-Seymour metric compression problem asks to compactly encode the S-to-T distances. Consider a vector storing the distances from an arbitrary vertex v to all vertices of S = {s_1,…,s_k} in their cyclic order. The pattern of v is obtained by taking the difference between every pair of consecutive values of this vector. In STOC'19, Li and Parter used a VC-dimension argument to show that in planar graphs, the number of distinct patterns, denoted p_#, is only O(k³). This resulted in a simple Õ(min{k⁴+|T|, k⋅|T|}) space compression of the Okamura-Seymour metric. We give an alternative proof of the p_# = O(k³) bound that exploits planarity beyond the VC-dimension argument. Namely, our proof relies on cut-cycle duality, as well as on the fact that distances among vertices of S are bounded by k. Our method implies the following:
(1) An Õ(p_#+k+|T|) space compression of the Okamura-Seymour metric, thus improving the compression of Li and Parter to Õ(min{k³+|T|, k⋅|T|}).
(2) An optimal Õ(k+|T|) space compression of the Okamura-Seymour metric, in the case where the vertices of T induce a connected component in G.
(3) A tight bound of p_# = Θ(k²) for the family of Halin graphs, whereas the VC-dimension argument is limited to showing p_# = O(k³).
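The following sketch (an illustration, not the paper's compression) computes the patterns defined above by brute force: one BFS per site s_i, then the vector of consecutive differences for every vertex. The adjacency-list format and the assumption that S is supplied in its cyclic face order and that the graph is connected are ours.

from collections import deque

def bfs_distances(adj, src):
    # Unweighted single-source distances from src.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def distinct_patterns(adj, S):
    # Pattern of v = consecutive differences of (dist(v, s_1), ..., dist(v, s_k)),
    # with s_1..s_k given in cyclic order along the face. Returns the set of patterns.
    dists = [bfs_distances(adj, s) for s in S]   # one BFS per site
    patterns = set()
    for v in adj:
        vec = [d[v] for d in dists]
        patterns.add(tuple(vec[i + 1] - vec[i] for i in range(len(vec) - 1)))
    return patterns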

Cite as

Shay Mozes, Nathan Wallheimer, and Oren Weimann. Improved Compression of the Okamura-Seymour Metric. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 27:1-27:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{mozes_et_al:LIPIcs.ISAAC.2022.27,
  author =	{Mozes, Shay and Wallheimer, Nathan and Weimann, Oren},
  title =	{{Improved Compression of the Okamura-Seymour Metric}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{27:1--27:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.27},
  URN =		{urn:nbn:de:0030-drops-173123},
  doi =		{10.4230/LIPIcs.ISAAC.2022.27},
  annote =	{Keywords: Shortest paths, planar graphs, metric compression, distance oracles}
}
Document
The Fine-Grained Complexity of Episode Matching

Authors: Philip Bille, Inge Li Gørtz, Shay Mozes, Teresa Anna Steiner, and Oren Weimann

Published in: LIPIcs, Volume 223, 33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022)


Abstract
Given two strings S and P, the Episode Matching problem is to find the shortest substring of S that contains P as a subsequence. The best known upper bound for this problem is Õ(nm) by Das et al. (1997), where n,m are the lengths of S and P, respectively. Although the problem is well studied and has many applications in data mining, this bound has never been improved. In this paper we show why this is the case by proving that no O((nm)^{1-ε}) algorithm (even for binary strings) exists, unless the Strong Exponential Time Hypothesis (SETH) is false. We then consider the indexing version of the problem, where S is preprocessed into a data structure for answering episode matching queries P. We show that for any τ, there is a data structure using O(n + (n/τ)^k) space that answers episode matching queries for any P of length k in O(k ⋅ τ ⋅ log log n) time. We complement this upper bound with an almost matching lower bound, showing that any data structure that answers episode matching queries for patterns of length k in time O(n^δ) must use Ω(n^{k-kδ-o(1)}) space, unless the Strong k-Set Disjointness Conjecture is false. Finally, for the special case of k = 2, we present a faster construction of the data structure using fast min-plus multiplication of bounded integer matrices.
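As context for the Õ(nm) bound discussed above, here is the standard textbook dynamic program for episode matching (shortest window of S containing P as a subsequence); it is the classical algorithm, not a contribution of the paper, and the 0-indexed interface is an illustrative choice.

def shortest_episode(S, P):
    # O(n*m) DP: return (start, end) of a shortest substring S[start:end] containing P
    # as a subsequence, or None if no such substring exists.
    n, m = len(S), len(P)
    NEG = -1  # marks "no valid window start"
    # dp[j] = largest k such that P[:j] is a subsequence of S[k:i], for the current i
    dp = [0] + [NEG] * m
    best = None
    for i in range(1, n + 1):
        new = [i] + [NEG] * m
        for j in range(1, m + 1):
            new[j] = dp[j - 1] if S[i - 1] == P[j - 1] else dp[j]
        dp = new
        if dp[m] != NEG and (best is None or i - dp[m] < best[1] - best[0]):
            best = (dp[m], i)
    return best

# Example: shortest_episode("abcdebdde", "bde") == (1, 5), i.e. the window "bcde".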

Cite as

Philip Bille, Inge Li Gørtz, Shay Mozes, Teresa Anna Steiner, and Oren Weimann. The Fine-Grained Complexity of Episode Matching. In 33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 223, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bille_et_al:LIPIcs.CPM.2022.4,
  author =	{Bille, Philip and G{\o}rtz, Inge Li and Mozes, Shay and Steiner, Teresa Anna and Weimann, Oren},
  title =	{{The Fine-Grained Complexity of Episode Matching}},
  booktitle =	{33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022)},
  pages =	{4:1--4:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-234-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{223},
  editor =	{Bannai, Hideo and Holub, Jan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2022.4},
  URN =		{urn:nbn:de:0030-drops-161312},
  doi =		{10.4230/LIPIcs.CPM.2022.4},
  annote =	{Keywords: Pattern matching, fine-grained complexity, longest common subsequence}
}
Document
Track A: Algorithms, Complexity and Games
An Almost Optimal Edit Distance Oracle

Authors: Panagiotis Charalampopoulos, Paweł Gawrychowski, Shay Mozes, and Oren Weimann

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We consider the problem of preprocessing two strings S and T, of lengths m and n, respectively, in order to be able to efficiently answer the following queries: Given positions i,j in S and positions a,b in T, return the optimal alignment score of S[i..j] and T[a..b]. Let N = mn. We present an oracle with preprocessing time N^{1+o(1)} and space N^{1+o(1)} that answers queries in log^{2+o(1)} N time. In other words, we show that we can efficiently query for the alignment score of every pair of substrings after preprocessing the input for almost the same time it takes to compute just the alignment of S and T. Our oracle uses ideas from our distance oracle for planar graphs [STOC 2019] and exploits the special structure of the alignment graph. Conditioned on popular hardness conjectures, this result is optimal up to subpolynomial factors. Our results apply to both edit distance and longest common subsequence (LCS). The best previously known oracle with construction time and size 𝒪(N) has slow Ω(√N) query time [Sakai, TCS 2019], and the one with size N^{1+o(1)} and query time log^{2+o(1)} N (using a planar graph distance oracle) has slow Ω(N^{3/2}) construction time [Long & Pettie, SODA 2021]. We improve both approaches by roughly a √N factor.
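For comparison with the oracle bounds above, this is the naive no-preprocessing baseline: answer each query by running the textbook quadratic edit-distance DP on the two substrings. The half-open, 0-indexed query interface is an assumption made for the sketch.

def edit_distance(s, t):
    # Textbook O(|s|*|t|) dynamic program with unit costs.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i] + [0] * len(t)
        for j, ct in enumerate(t, 1):
            cur[j] = min(prev[j] + 1,               # delete cs
                         cur[j - 1] + 1,            # insert ct
                         prev[j - 1] + (cs != ct))  # substitute / match
        prev = cur
    return prev[-1]

def query(S, T, i, j, a, b):
    # Naive substring query: edit distance of S[i:j] and T[a:b].
    return edit_distance(S[i:j], T[a:b])

# Example: query("kitten", "sitting", 0, 6, 0, 7) == 3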

Cite as

Panagiotis Charalampopoulos, Paweł Gawrychowski, Shay Mozes, and Oren Weimann. An Almost Optimal Edit Distance Oracle. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 48:1-48:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{charalampopoulos_et_al:LIPIcs.ICALP.2021.48,
  author =	{Charalampopoulos, Panagiotis and Gawrychowski, Pawe{\l} and Mozes, Shay and Weimann, Oren},
  title =	{{An Almost Optimal Edit Distance Oracle}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{48:1--48:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.48},
  URN =		{urn:nbn:de:0030-drops-141175},
  doi =		{10.4230/LIPIcs.ICALP.2021.48},
  annote =	{Keywords: longest common subsequence, edit distance, planar graphs, Voronoi diagrams}
}
Document
Track A: Algorithms, Complexity and Games
On the Fine-Grained Complexity of Parity Problems

Authors: Amir Abboud, Shon Feller, and Oren Weimann

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
We consider the parity variants of basic problems studied in fine-grained complexity. We show that finding the exact solution is just as hard as finding its parity (i.e., whether the solution is even or odd) for a large number of classical problems, including All-Pairs Shortest Paths (APSP), Diameter, Radius, Median, Second Shortest Path, Maximum Consecutive Subsums, Min-Plus Convolution, and 0/1-Knapsack. A direct reduction from a problem to its parity version is often difficult to design. Instead, we revisit the existing hardness reductions and tailor them in a problem-specific way to the parity version. Nearly all reductions from APSP in the literature proceed via the (subcubic-equivalent but simpler) Negative Weight Triangle (NWT) problem. Our new modified reductions also start from NWT or a non-standard parity variant of it. We are not able to establish a subcubic equivalence with the more natural parity counting variant of NWT, where we ask if the number of negative triangles is even or odd. Perhaps surprisingly, we justify this by designing a reduction from the seemingly-harder Zero Weight Triangle problem, showing that parity is (conditionally) strictly harder than decision for NWT.
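A brute-force illustration of the Negative Weight Triangle problem and its parity-counting variant mentioned above (cubic time, for small instances only); the symmetric adjacency-matrix input format is an assumption of the sketch.

from itertools import combinations

def has_negative_triangle(w):
    # w[i][j] = weight of edge (i, j). True if some triangle has negative total weight.
    n = len(w)
    return any(w[i][j] + w[j][k] + w[i][k] < 0
               for i, j, k in combinations(range(n), 3))

def negative_triangle_parity(w):
    # Parity-counting variant: is the number of negative triangles odd?
    n = len(w)
    count = sum(w[i][j] + w[j][k] + w[i][k] < 0
                for i, j, k in combinations(range(n), 3))
    return count % 2

# Example: has_negative_triangle([[0, 1, -5], [1, 0, 2], [-5, 2, 0]]) == True
# (the only triangle has weight 1 + 2 - 5 = -2).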

Cite as

Amir Abboud, Shon Feller, and Oren Weimann. On the Fine-Grained Complexity of Parity Problems. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 5:1-5:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2020.5,
  author =	{Abboud, Amir and Feller, Shon and Weimann, Oren},
  title =	{{On the Fine-Grained Complexity of Parity Problems}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{5:1--5:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.5},
  URN =		{urn:nbn:de:0030-drops-124127},
  doi =		{10.4230/LIPIcs.ICALP.2020.5},
  annote =	{Keywords: All-pairs shortest paths, Fine-grained complexity, Diameter, Distance product, Min-plus convolution, Parity problems}
}
Document
Track A: Algorithms, Complexity and Games
Minimum Cut in O(m log² n) Time

Authors: Paweł Gawrychowski, Shay Mozes, and Oren Weimann

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
We give a randomized algorithm that finds a minimum cut in an undirected weighted m-edge n-vertex graph G with high probability in O(m log² n) time. This is the first improvement to Karger’s celebrated O(m log³ n) time algorithm from 1996. Our main technical contribution is a deterministic O(m log n) time algorithm that, given a spanning tree T of G, finds a minimum cut of G that 2-respects (cuts two edges of) T.
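For context, a sketch of Karger's classical random-contraction algorithm on an unweighted multigraph (the line of work whose O(m log³ n) refinement this paper improves); it is not the paper's 2-respecting-cut algorithm, and the edge-list input format and number of repetitions are illustrative.

import random

def contract_once(edges, n):
    # One run of Karger's contraction on an unweighted multigraph given as a list of
    # (u, v) pairs over vertices 0..n-1. Returns the size of the resulting cut.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    while components > 2:
        u, v = random.choice(edges)   # self-loops are simply skipped
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return sum(1 for (u, v) in edges if find(u) != find(v))

def karger_min_cut(edges, n, runs=200):
    # Each run finds a fixed minimum cut with probability >= 2/(n*(n-1));
    # taking the best of many runs boosts this to high probability.
    return min(contract_once(edges, n) for _ in range(runs))

# Example: on the 4-cycle [(0, 1), (1, 2), (2, 3), (3, 0)], the minimum cut has size 2.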

Cite as

Paweł Gawrychowski, Shay Mozes, and Oren Weimann. Minimum Cut in O(m log² n) Time. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 57:1-57:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2020.57,
  author =	{Gawrychowski, Pawe{\l} and Mozes, Shay and Weimann, Oren},
  title =	{{Minimum Cut in O(m log² n) Time}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{57:1--57:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.57},
  URN =		{urn:nbn:de:0030-drops-124646},
  doi =		{10.4230/LIPIcs.ICALP.2020.57},
  annote =	{Keywords: Minimum cut, Minimum 2-respecting cut}
}
Document
Complete Volume
LIPIcs, Volume 161, CPM 2020, Complete Volume

Authors: Inge Li Gørtz and Oren Weimann

Published in: LIPIcs, Volume 161, 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)


Abstract
LIPIcs, Volume 161, CPM 2020, Complete Volume

Cite as

31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 161, pp. 1-418, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Proceedings{grtz_et_al:LIPIcs.CPM.2020,
  title =	{{LIPIcs, Volume 161, CPM 2020, Complete Volume}},
  booktitle =	{31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)},
  pages =	{1--418},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-149-8},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{161},
  editor =	{G{\o}rtz, Inge Li and Weimann, Oren},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2020},
  URN =		{urn:nbn:de:0030-drops-121245},
  doi =		{10.4230/LIPIcs.CPM.2020},
  annote =	{Keywords: LIPIcs, Volume 161, CPM 2020, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Inge Li Gørtz and Oren Weimann

Published in: LIPIcs, Volume 161, 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 161, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{grtz_et_al:LIPIcs.CPM.2020.0,
  author =	{G{\o}rtz, Inge Li and Weimann, Oren},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-149-8},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{161},
  editor =	{G{\o}rtz, Inge Li and Weimann, Oren},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2020.0},
  URN =		{urn:nbn:de:0030-drops-121252},
  doi =		{10.4230/LIPIcs.CPM.2020.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Top Tree Compression of Tries

Authors: Philip Bille, Paweł Gawrychowski, Inge Li Gørtz, Gad M. Landau, and Oren Weimann

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
We present a compressed representation of tries based on top tree compression [ICALP 2013] that works on a standard, comparison-based, pointer machine model of computation and supports efficient prefix search queries. Namely, we show how to preprocess a set of strings of total length n over an alphabet of size sigma into a compressed data structure of worst-case optimal size O(n/log_sigma n) that, given a pattern string P of length m, determines if P is a prefix of one of the strings in time O(min(m log sigma, m + log n)). We show that this query time is in fact optimal regardless of the size of the data structure. Existing solutions either use Omega(n) space or rely on word RAM techniques, such as tabulation, hashing, address arithmetic, or word-level parallelism, and hence do not work on a pointer machine. Our result is the first solution on a pointer machine that achieves worst-case o(n) space. Along the way, we develop several interesting data structures that work on a pointer machine and are of independent interest. These include an optimal data structure for random access to a grammar-compressed string and an optimal data structure for a variant of the level ancestor problem.
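For contrast with the compressed structure above, here is the uncompressed Theta(n)-space baseline: a plain trie answering the same prefix-search query. It uses hashing (so it is not a pointer-machine solution); the class and method names are illustrative.

class TrieNode:
    __slots__ = ("children", "terminal")
    def __init__(self):
        self.children = {}   # character -> TrieNode
        self.terminal = False

class Trie:
    # Uncompressed trie: Theta(n) space, O(m) query with hashing -- the baseline,
    # not the o(n)-space pointer-machine structure from the paper.
    def __init__(self, strings):
        self.root = TrieNode()
        for s in strings:
            node = self.root
            for c in s:
                node = node.children.setdefault(c, TrieNode())
            node.terminal = True

    def is_prefix(self, P):
        # Is P a prefix of one of the stored strings?
        node = self.root
        for c in P:
            node = node.children.get(c)
            if node is None:
                return False
        return True

# Example: Trie(["foo", "bar"]).is_prefix("fo") == True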

Cite as

Philip Bille, Paweł Gawrychowski, Inge Li Gørtz, Gad M. Landau, and Oren Weimann. Top Tree Compression of Tries. In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bille_et_al:LIPIcs.ISAAC.2019.4,
  author =	{Bille, Philip and Gawrychowski, Pawe{\l} and G{\o}rtz, Inge Li and Landau, Gad M. and Weimann, Oren},
  title =	{{Top Tree Compression of Tries}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.4},
  URN =		{urn:nbn:de:0030-drops-115000},
  doi =		{10.4230/LIPIcs.ISAAC.2019.4},
  annote =	{Keywords: pattern matching, tree compression, top trees, pointer machine}
}
Document
Near-Optimal Distance Emulator for Planar Graphs

Authors: Hsien-Chih Chang, Pawel Gawrychowski, Shay Mozes, and Oren Weimann

Published in: LIPIcs, Volume 112, 26th Annual European Symposium on Algorithms (ESA 2018)


Abstract
Given a graph G and a set of terminals T, a distance emulator of G is another graph H (not necessarily a subgraph of G) containing T, such that all the pairwise distances in G between vertices of T are preserved in H. An important open question is to find the smallest possible distance emulator. We prove that, given any subset of k terminals in an n-vertex undirected unweighted planar graph, we can construct in O~(n) time a distance emulator of size O~(min(k^2, sqrt{k*n})). This is optimal up to logarithmic factors. The existence of such a distance emulator provides a straightforward framework to solve distance-related problems on planar graphs: replace the input graph with the distance emulator, and apply whatever algorithm is available to the resulting emulator. In particular, our result implies that, on any unweighted undirected planar graph, one can compute all-pairs shortest path distances among k terminals in O~(n) time when k = O(n^{1/3}).
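As a point of reference, the trivial emulator of size Θ(k²) is simply the complete graph on the terminals weighted by their pairwise distances in G; a sketch follows (one BFS per terminal, with an unweighted undirected adjacency-list dictionary and a connected graph assumed). The paper's construction achieves the smaller Õ(min(k², √(kn))) size.

from collections import deque

def bfs_distances(adj, src):
    # Unweighted single-source distances from src.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def trivial_emulator(adj, terminals):
    # Complete weighted graph on the terminals; edge weight = distance in G.
    # Preserves all terminal-to-terminal distances but has Theta(k^2) edges.
    H = {}
    for t in terminals:
        dist = bfs_distances(adj, t)
        for s in terminals:
            if s != t:
                H[(t, s)] = dist[s]
    return H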

Cite as

Hsien-Chih Chang, Pawel Gawrychowski, Shay Mozes, and Oren Weimann. Near-Optimal Distance Emulator for Planar Graphs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 16:1-16:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chang_et_al:LIPIcs.ESA.2018.16,
  author =	{Chang, Hsien-Chih and Gawrychowski, Pawel and Mozes, Shay and Weimann, Oren},
  title =	{{Near-Optimal Distance Emulator for Planar Graphs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{16:1--16:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.16},
  URN =		{urn:nbn:de:0030-drops-94796},
  doi =		{10.4230/LIPIcs.ESA.2018.16},
  annote =	{Keywords: planar graphs, shortest paths, metric compression, distance preservers, distance emulators, distance oracles}
}
Document
A Faster Construction of Greedy Consensus Trees

Authors: Pawel Gawrychowski, Gad M. Landau, Wing-Kin Sung, and Oren Weimann

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
A consensus tree is a phylogenetic tree that captures the similarity between a set of conflicting phylogenetic trees. The problem of computing a consensus tree is a major step in phylogenetic tree reconstruction. It is also central for predicting a species tree from a set of gene trees, as indicated recently in [Nature 2013]. This paper focuses on two of the most well-known and widely used consensus tree methods: the greedy consensus tree and the frequency difference consensus tree. Given k conflicting trees each with n leaves, the previous fastest algorithms for these problems were O(k n^2) for the greedy consensus tree [J. ACM 2016] and O~(min{k n^2, k^2n}) for the frequency difference consensus tree [ACM TCBB 2016]. We improve these running times to O~(k n^{1.5}) and O~(k n) respectively.
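A small sketch of the greedy consensus rule itself (the definition, not the paper's faster algorithm): collect the clusters of every input tree, sort by frequency, and keep each cluster that is compatible with everything kept so far. Trees are encoded here as nested tuples of leaf labels, ties are broken arbitrarily, and only the selected cluster set is returned; all of these are simplifying assumptions.

from collections import Counter

def clusters(tree, acc):
    # tree is a leaf label (str) or a tuple of subtrees; record the leaf set of every subtree.
    if not isinstance(tree, tuple):
        return frozenset([tree])
    leaves = frozenset().union(*(clusters(sub, acc) for sub in tree))
    acc.append(leaves)
    return leaves

def greedy_consensus_clusters(trees):
    # Count each cluster's frequency over the input trees, then scan clusters in
    # decreasing frequency and keep those compatible with everything kept so far.
    # Two clusters are compatible iff one contains the other or they are disjoint.
    counts = Counter()
    for t in trees:
        acc = []
        clusters(t, acc)
        counts.update(set(acc))
    kept = []
    for c, _ in counts.most_common():
        if all(c <= d or d <= c or not (c & d) for d in kept):
            kept.append(c)
    return kept  # the clusters of the greedy consensus tree

# Example: greedy_consensus_clusters([(("a", "b"), "c"), (("a", "b"), "c")])
# keeps {"a", "b"} together with the full leaf set.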

Cite as

Pawel Gawrychowski, Gad M. Landau, Wing-Kin Sung, and Oren Weimann. A Faster Construction of Greedy Consensus Trees. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2018.63,
  author =	{Gawrychowski, Pawel and Landau, Gad M. and Sung, Wing-Kin and Weimann, Oren},
  title =	{{A Faster Construction of Greedy Consensus Trees}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.63},
  URN =		{urn:nbn:de:0030-drops-90676},
  doi =		{10.4230/LIPIcs.ICALP.2018.63},
  annote =	{Keywords: phylogenetic trees, greedy consensus trees, dynamic trees}
}
Document
A Faster FPTAS for #Knapsack

Authors: Pawel Gawrychowski, Liran Markin, and Oren Weimann

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
Given a set W = {w_1,..., w_n} of non-negative integer weights and an integer C, the #Knapsack problem asks to count the number of distinct subsets of W whose total weight is at most C. In the more general integer version of the problem, the subsets are multisets. That is, we are also given a set {u_1,..., u_n} and we are allowed to take up to u_i items of weight w_i. We present a deterministic FPTAS for #Knapsack running in O(n^{2.5} epsilon^{-1.5} log(n epsilon^{-1}) log(n epsilon)) time. The previous best deterministic algorithm [FOCS 2011] runs in O(n^3 epsilon^{-1} log(n epsilon^{-1})) time (see also [ESA 2014] for a logarithmic factor improvement). The previous best randomized algorithm [STOC 2003] runs in O(n^{2.5} sqrt{log(n epsilon^{-1})} + epsilon^{-2} n^2) time. Therefore, for the case of constant epsilon, we close the gap between the O~(n^{2.5}) randomized algorithm and the O~(n^3) deterministic algorithm. For the integer version with U = max_i {u_i}, we present a deterministic FPTAS running in O(n^{2.5} epsilon^{-1.5} log(n epsilon^{-1} log U) log(n epsilon) log^2 U) time. The previous best deterministic algorithm [TCS 2016] runs in O(n^3 epsilon^{-1} log(n epsilon^{-1} log U) log^2 U) time.
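For reference, the exact pseudo-polynomial counting DP that the FPTAS approximates runs in O(nC) time; a sketch is below (standard subset-sum counting, not the paper's algorithm).

def count_knapsack(weights, C):
    # Exact #Knapsack by dynamic programming: ways[c] = number of subsets of the
    # weights seen so far with total weight exactly c. O(n*C) time -- the exact
    # pseudo-polynomial baseline, not the FPTAS from the paper.
    ways = [0] * (C + 1)
    ways[0] = 1
    for w in weights:
        for c in range(C, w - 1, -1):
            ways[c] += ways[c - w]
    return sum(ways)

# Example: count_knapsack([1, 2, 3], 3) == 5   (the subsets {}, {1}, {2}, {3}, {1,2})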

Cite as

Pawel Gawrychowski, Liran Markin, and Oren Weimann. A Faster FPTAS for #Knapsack. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 64:1-64:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2018.64,
  author =	{Gawrychowski, Pawel and Markin, Liran and Weimann, Oren},
  title =	{{A Faster FPTAS for #Knapsack}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{64:1--64:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.64},
  URN =		{urn:nbn:de:0030-drops-90687},
  doi =		{10.4230/LIPIcs.ICALP.2018.64},
  annote =	{Keywords: knapsack, approximate counting, K-approximating sets and functions}
}
Document
Dispersion on Trees

Authors: Pawel Gawrychowski, Nadav Krasnopolsky, Shay Mozes, and Oren Weimann

Published in: LIPIcs, Volume 87, 25th Annual European Symposium on Algorithms (ESA 2017)


Abstract
In the k-dispersion problem, we need to select k nodes of a given graph so as to maximize the minimum distance between any two chosen nodes. This can be seen as a generalization of the independent set problem, where the goal is to select nodes so that the minimum distance is larger than 1. We design an optimal O(n) time algorithm for the dispersion problem on trees consisting of n nodes, thus improving the previous O(n log n) time solution from 1997. We also consider the weighted case, where the goal is to choose a set of nodes of total weight at least W. We present an O(n log^2 n) algorithm improving the previous O(n log^4 n) solution. Our solution builds on the search version (where we know the minimum distance lambda between the chosen nodes) for which we present tight Theta(n log n) upper and lower bounds.
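To illustrate the search-version framework in its simplest setting, here is the one-dimensional (path) special case: a greedy test for a given lambda combined with binary search over integer answers. The tree algorithms in the paper are substantially more involved; this sketch and its integer-distance assumption are only illustrative.

def can_disperse_on_path(positions, k, lam):
    # Decision version on a path: can we pick k of the given positions that are
    # pairwise at distance >= lam? Greedy left-to-right is optimal in one dimension.
    chosen = 0
    last = None
    for x in sorted(positions):
        if last is None or x - last >= lam:
            chosen += 1
            last = x
    return chosen >= k

def max_dispersion_on_path(positions, k):
    # Binary search over the answer lam (integer positions, k <= len(positions) assumed).
    lo, hi = 0, max(positions) - min(positions)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if can_disperse_on_path(positions, k, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Example: max_dispersion_on_path([0, 3, 7, 10], 3) == 3   (e.g. pick 0, 3, 7)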

Cite as

Pawel Gawrychowski, Nadav Krasnopolsky, Shay Mozes, and Oren Weimann. Dispersion on Trees. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 40:1-40:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{gawrychowski_et_al:LIPIcs.ESA.2017.40,
  author =	{Gawrychowski, Pawel and Krasnopolsky, Nadav and Mozes, Shay and Weimann, Oren},
  title =	{{Dispersion on Trees}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{40:1--40:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.40},
  URN =		{urn:nbn:de:0030-drops-78438},
  doi =		{10.4230/LIPIcs.ESA.2017.40},
  annote =	{Keywords: parametric search, dispersion, k-center, dynamic programming}
}
Document
The Nearest Colored Node in a Tree

Authors: Pawel Gawrychowski, Gad M. Landau, Shay Mozes, and Oren Weimann

Published in: LIPIcs, Volume 54, 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016)


Abstract
We start a systematic study of data structures for the nearest colored node problem on trees. Given a tree with colored nodes and weighted edges, we want to answer queries (v,c) asking for the nearest node to node v that has color c. This is a natural generalization of the well-known nearest marked ancestor problem. We give an O(n)-space O(log log n)-query solution and show that this is optimal. We also consider the dynamic case where updates can change a node's color and show that in O(n) space we can support both updates and queries in O(log n) time. We complement this by showing that O(polylog n) update time implies Omega(log n / log log n) query time. Finally, we consider the case where updates can change the edges of the tree (link-cut operations). There is a known (top-tree based) solution that requires update time that is roughly linear in the number of colors. We show that this solution is probably optimal by showing that a strictly sublinear update time implies a strictly subcubic time algorithm for the classical all pairs shortest paths problem on a general graph. We also consider versions where the tree is rooted, and the query asks for the nearest ancestor/descendant of node v that has color c, and present efficient data structures for both variants in the static and the dynamic setting.
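A naive per-query baseline for the static problem (no preprocessed data structure at all): run Dijkstra from v and stop at the first settled node of color c. The adjacency-list and color-dictionary formats are assumptions of the sketch.

import heapq

def nearest_colored_node(adj, color, v, c):
    # adj[u] = list of (neighbor, weight); color[u] = the color of u.
    # Returns (node, distance) for the nearest node of color c, or None if unreachable.
    dist = {v: 0}
    heap = [(0, v)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        if color[u] == c:
            return u, d
        for w, wt in adj[u]:
            nd = d + wt
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return None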

Cite as

Pawel Gawrychowski, Gad M. Landau, Shay Mozes, and Oren Weimann. The Nearest Colored Node in a Tree. In 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 54, pp. 25:1-25:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{gawrychowski_et_al:LIPIcs.CPM.2016.25,
  author =	{Gawrychowski, Pawel and Landau, Gad M. and Mozes, Shay and Weimann, Oren},
  title =	{{The Nearest Colored Node in a Tree}},
  booktitle =	{27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016)},
  pages =	{25:1--25:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-012-5},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{54},
  editor =	{Grossi, Roberto and Lewenstein, Moshe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2016.25},
  URN =		{urn:nbn:de:0030-drops-60674},
  doi =		{10.4230/LIPIcs.CPM.2016.25},
  annote =	{Keywords: Marked ancestor, Vertex-label distance oracles, Nearest colored descendant, Top-trees}
}
Document
Improved Bounds for Online Preemptive Matching

Authors: Leah Epstein, Asaf Levin, Danny Segev, and Oren Weimann

Published in: LIPIcs, Volume 20, 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)


Abstract
When designing a preemptive online algorithm for the maximum matching problem, we wish to maintain a valid matching M while edges of the underlying graph are presented one after the other. When presented with an edge e, the algorithm should decide whether to augment the matching M by adding e (in which case e may be removed later on) or to keep M in its current form without adding e (in which case e is lost for good). The objective is to eventually hold a matching M with maximum weight. The main contribution of this paper is to establish new lower and upper bounds on the competitive ratio achievable by preemptive online algorithms:
- We provide a lower bound of 1 + ln 2 ≈ 1.693 on the competitive ratio of any randomized algorithm for the maximum cardinality matching problem, thus improving on the currently best known bound of e/(e-1) ≈ 1.581 due to Karp, Vazirani, and Vazirani [STOC'90].
- We devise a randomized algorithm that achieves an expected competitive ratio of 5.356 for maximum weight matching. This finding demonstrates the power of randomization in this context, showing how to beat the tight bound of 3 + 2√2 ≈ 5.828 for deterministic algorithms, obtained by combining the 5.828 upper bound of McGregor [APPROX'05] and the recent 5.828 lower bound of Varadaraja [ICALP'11].
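A short sketch in the spirit of the deterministic preemptive approach discussed above: a newly arriving edge evicts its conflicting matched edges only if its weight beats their total by a factor 1+gamma. The streaming interface and the particular value of gamma are illustrative; the classical analysis tunes gamma to obtain the 5.828 ratio.

def preemptive_matching(stream, gamma=0.5):
    # stream yields edges (u, v, w) one at a time; gamma is a tunable threshold.
    # On edge (u, v, w), evict the conflicting matched edges if w exceeds
    # (1 + gamma) times their total weight. Returns the final matching.
    matched = {}   # endpoint -> (u, v, w)
    for (u, v, w) in stream:
        conflicts = {matched[x] for x in (u, v) if x in matched}
        if w > (1 + gamma) * sum(e[2] for e in conflicts):
            for (a, b, _) in conflicts:
                matched.pop(a, None)
                matched.pop(b, None)
            matched[u] = matched[v] = (u, v, w)
    return set(matched.values())   # each kept edge is stored once per endpoint

# Example: preemptive_matching([(1, 2, 1.0), (2, 3, 2.0)]) == {(2, 3, 2.0)}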

Cite as

Leah Epstein, Asaf Levin, Danny Segev, and Oren Weimann. Improved Bounds for Online Preemptive Matching. In 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 20, pp. 389-399, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{epstein_et_al:LIPIcs.STACS.2013.389,
  author =	{Epstein, Leah and Levin, Asaf and Segev, Danny and Weimann, Oren},
  title =	{{Improved Bounds for Online Preemptive Matching}},
  booktitle =	{30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)},
  pages =	{389--399},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-50-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{20},
  editor =	{Portier, Natacha and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2013.389},
  URN =		{urn:nbn:de:0030-drops-39501},
  doi =		{10.4230/LIPIcs.STACS.2013.389},
  annote =	{Keywords: Online algorithms, matching, lower bound}
}
Document
Optimal Packed String Matching

Authors: Oren Ben-Kiki, Philip Bille, Dany Breslauer, Leszek Gasieniec, Roberto Grossi, and Oren Weimann

Published in: LIPIcs, Volume 13, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)


Abstract
In the packed string matching problem, each machine word accommodates alpha characters, thus an n-character text occupies n/alpha memory words. We extend the Crochemore-Perrin constant-space O(n)-time string matching algorithm to run in optimal O(n/alpha) time and even in real-time, achieving a factor alpha speedup over traditional algorithms that examine each character individually. Our solution can be efficiently implemented, unlike prior theoretical packed string matching work. We adapt the standard RAM model and only use its AC0 instructions (i.e., no multiplication) plus two specialized AC0 packed string instructions. The main string-matching instruction is available in commodity processors (i.e., Intel's SSE4.2 and AVX Advanced String Operations); the other maximal-suffix instruction is only required during pattern preprocessing. In the absence of these two specialized instructions, we propose theoretically-efficient emulation using integer multiplication (not AC0) and table lookup.
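To illustrate word-level parallelism on strings (the "bit parallelism" keyword below), here is the classic Shift-And matcher, which packs the pattern-matching state into a single machine word (emulated here by a Python integer). It is not the Crochemore-Perrin-based algorithm of the paper, and it assumes a nonempty pattern.

def shift_and(text, pattern):
    # Classic Shift-And: processes the text one character at a time but updates all
    # partial matches of the pattern in parallel within one word.
    # Returns all end positions of occurrences; assumes pattern is nonempty.
    m = len(pattern)
    mask = {}
    for i, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << i)
    state, hits, accept = 0, [], 1 << (m - 1)
    for j, c in enumerate(text):
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & accept:
            hits.append(j)          # an occurrence ends at position j
    return hits

# Example: shift_and("abracadabra", "abra") == [3, 10]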

Cite as

Oren Ben-Kiki, Philip Bille, Dany Breslauer, Leszek Gasieniec, Roberto Grossi, and Oren Weimann. Optimal Packed String Matching. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 423-432, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{benkiki_et_al:LIPIcs.FSTTCS.2011.423,
  author =	{Ben-Kiki, Oren and Bille, Philip and Breslauer, Dany and Gasieniec, Leszek and Grossi, Roberto and Weimann, Oren},
  title =	{{Optimal Packed String Matching}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{423--432},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.423},
  URN =		{urn:nbn:de:0030-drops-33558},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.423},
  annote =	{Keywords: String matching, bit parallelism, real time, space efficiency}
}
Document
A Unified Algorithm for Accelerating Edit-Distance Computation via Text-Compression

Authors: Danny Hermelin, Gad M. Landau, Shir Landau, and Oren Weimann

Published in: LIPIcs, Volume 3, 26th International Symposium on Theoretical Aspects of Computer Science (2009)


Abstract
The edit distance problem is a classical fundamental problem in computer science in general, and in combinatorial pattern matching in particular. The standard dynamic-programming solution for this problem computes the edit-distance between a pair of strings of total length $O(N)$ in $O(N^2)$ time. To date, this quadratic upper bound has never been substantially improved for general strings. However, there are known techniques for breaking this bound in case the strings are known to compress well under a particular compression scheme. The basic idea is to first compress the strings, and then to compute the edit distance between the compressed strings. As it turns out, practically all known $o(N^2)$ edit-distance algorithms work, in some sense, under the same paradigm described above. It is therefore natural to ask whether there is a single edit-distance algorithm that works for strings which are compressed under any compression scheme. A rephrasing of this question is to ask whether a single algorithm can exploit the compressibility properties of strings under any compression method, even if each string is compressed using a different compression scheme. In this paper we set out to answer this question by using straight-line programs. These provide a generic platform for representing many popular compression schemes including the LZ-family, Run-Length Encoding, Byte-Pair Encoding, and dictionary methods. For two strings of total length $N$ having straight-line program representations of total size $n$, we present an algorithm running in $O(n^{1.4}N^{1.2})$ time for computing the edit-distance of these two strings under any rational scoring function, and an $O(n^{1.34}N^{1.34})$-time algorithm for arbitrary scoring functions. This improves on a recent algorithm of Tiskin that runs in $O(nN^{1.5})$ time, and works only for rational scoring functions.
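A minimal sketch of what a straight-line program is, together with the decompress-then-DP baseline that compression-aware algorithms such as the paper's avoid; the rule encoding (a terminal character or a pair of symbols) is an illustrative choice.

def expand(slp, symbol, memo=None):
    # A straight-line program is a grammar in which every nonterminal has exactly one
    # rule, either a terminal character or the concatenation of two symbols.
    # Here: slp[X] = "a" (terminal) or ("Y", "Z") (concatenation).
    memo = {} if memo is None else memo
    if symbol not in memo:
        rule = slp[symbol]
        memo[symbol] = rule if isinstance(rule, str) else \
            expand(slp, rule[0], memo) + expand(slp, rule[1], memo)
    return memo[symbol]

def edit_distance(s, t):
    # Baseline: decompress fully and run the standard O(N^2) dynamic program
    # (the point of the paper is to beat this without full decompression).
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i] + [0] * len(t)
        for j, ct in enumerate(t, 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct))
        prev = cur
    return prev[-1]

# Example SLP for "abab": A -> "a", B -> "b", C -> AB, D -> CC
# expand({"A": "a", "B": "b", "C": ("A", "B"), "D": ("C", "C")}, "D") == "abab"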

Cite as

Danny Hermelin, Gad M. Landau, Shir Landau, and Oren Weimann. A Unified Algorithm for Accelerating Edit-Distance Computation via Text-Compression. In 26th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 3, pp. 529-540, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{hermelin_et_al:LIPIcs.STACS.2009.1804,
  author =	{Hermelin, Danny and Landau, Gad M. and Landau, Shir and Weimann, Oren},
  title =	{{A Unified Algorithm for Accelerating Edit-Distance Computation via Text-Compression}},
  booktitle =	{26th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{529--540},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-09-5},
  ISSN =	{1868-8969},
  year =	{2009},
  volume =	{3},
  editor =	{Albers, Susanne and Marion, Jean-Yves},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2009.1804},
  URN =		{urn:nbn:de:0030-drops-18040},
  doi =		{10.4230/LIPIcs.STACS.2009.1804},
  annote =	{Keywords: Edit distance, Straight-line Programs, Dynamic programming acceleration via compression, Combinatorial pattern matching}
}